Approximation by Fully Complex Multilayer Perceptrons

Authors

  • Taehwan Kim
  • Tülay Adali
Abstract

We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, a consequence of Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs, including the use of two real-valued MLPs, one processing the real part and the other the imaginary part, have traditionally been employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain, addressing most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question about the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided, based on the characteristics of singularity among ETFs. First, fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be universal approximators of any continuous complex mapping. Second, the complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of singularity nearest to the origin.
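To make the fully complex construction concrete, here is a minimal sketch of a single-hidden-layer fully complex MLP forward pass. The layer sizes, random initialization, and the choice of tanh as the ETF are illustrative assumptions for this sketch, not the paper's specific setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_init(n_out, n_in, scale=0.1):
    # Complex weights with independent real and imaginary parts (assumed scheme).
    return scale * (rng.standard_normal((n_out, n_in))
                    + 1j * rng.standard_normal((n_out, n_in)))

W1, b1 = complex_init(8, 2), np.zeros(8, dtype=complex)
W2, b2 = complex_init(1, 8), np.zeros(1, dtype=complex)

def fully_complex_mlp(z):
    # np.tanh on a complex array evaluates the ETF tanh(z), so the real (I)
    # and imaginary (Q) parts are processed jointly rather than split into
    # two real-valued networks.
    h = np.tanh(W1 @ z + b1)
    return W2 @ h + b2

z = np.array([0.3 + 0.2j, -0.1 + 0.5j])
print(fully_complex_mlp(z))
```

Note that tanh has isolated poles at z = i(π/2 + kπ), so it falls under the third class of activations treated in the letter: convergence holds in the deleted annulus around the singularity nearest to the origin.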


Related Articles

Stochastic Approximation and Multilayer Perceptrons: The Gain Backpropagation Algorithm

A standard general algorithm, the stochastic approximation algorithm of Albert and Gardner [1], is applied in a new context to compute the weights of a multilayer perceptron network. This leads to a new algorithm, the gain backpropagation algorithm, which is related to, but significantly different from, the standard backpropagation algorithm [2]. Some simulation examples show the potential ...
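The snippet below is a hedged sketch of the Robbins-Monro stochastic-approximation recursion that such methods build on, not the gain backpropagation algorithm itself; the gain schedule a_k = a0/k, the quadratic toy loss, and all names are illustrative assumptions.

```python
import numpy as np

def sa_update(w, grad_fn, samples, a0=1.0):
    # w_{k+1} = w_k - a_k * g(w_k; x_k), with gains a_k -> 0 and sum a_k = inf.
    for k, x in enumerate(samples, start=1):
        w = w - (a0 / k) * grad_fn(w, x)
    return w

# Toy example: estimate the mean of noisy observations by minimizing
# E[(w - x)^2] / 2, whose stochastic gradient is (w - x).
rng = np.random.default_rng(1)
data = 3.0 + rng.standard_normal(5000)
w_hat = sa_update(0.0, lambda w, x: w - x, data)
print(w_hat)  # converges near 3.0
```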


Approximation by Fully Complex MLP Using Elementary Transcendental Activation Functions

Recently, we have presented ‘fully’ complex multi-layer perceptrons (MLPs) using a subset of complex elementary transcendental functions as the nonlinear activation functions. These functions jointly process the inphase (I) and quadrature (Q) components of data while taking full advantage of well-defined gradients in the error back-propagation. In this paper, the characteristics of these elemen...
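The claim about well-defined gradients can be checked numerically: for an analytic ETF such as tanh, the complex difference quotient converges to the same value, the derivative 1 - tanh²(z), regardless of the direction from which the step h approaches zero. A minimal sketch, assuming NumPy and an arbitrary test point:

```python
import numpy as np

z = 0.4 - 0.7j                      # arbitrary test point
analytic = 1 - np.tanh(z) ** 2      # complex derivative of tanh

# Approach along the real axis, the imaginary axis, and a diagonal:
for h in (1e-6, 1e-6j, (1 + 1j) * 1e-6):
    numeric = (np.tanh(z + h) - np.tanh(z)) / h
    print(f"step {h}: |numeric - analytic| = {abs(numeric - analytic):.2e}")
```

All three differences are tiny, which is exactly the direction-independence that the Cauchy-Riemann conditions guarantee and that nonanalytic split activations lack.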


Are Rosenblatt multilayer perceptrons more powerful than sigmoidal multilayer perceptrons? From a counter example to a general result

In the eighties, the problem of the lack of an efficient algorithm to train multilayer Rosenblatt perceptrons was solved by sigmoidal neural networks and backpropagation. But should we still try to find an efficient algorithm to train multilayer hardlimit neuronal networks, a task known to be an NP-complete problem? In this work we show that this would not be a waste of time by means of a counter ex...


Training Multilayer Perceptrons with the Extended Kalman Algorithm

A large fraction of recent work in artificial neural nets uses multilayer perceptrons trained with the back-propagation algorithm described by Rumelhart et al. This algorithm converges slowly for large or complex problems such as speech recognition, where thousands of iterations may be needed for convergence even with small data sets. In this paper, we show that training multilayer perceptrons...
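As a rough illustration of the idea, the sketch below applies a textbook extended-Kalman-filter update to a network's weight vector, treating the weights as the state to be estimated; the scalar output, fixed noise variance R, and the finite-difference Jacobian are simplifying assumptions, not the algorithm described in the paper.

```python
import numpy as np

def ekf_step(w, P, x, y, h, R=0.01, eps=1e-6):
    # One EKF update: weights w are the state, the training pair (x, y)
    # is the measurement y = h(w, x) + noise.
    n = w.size
    # Finite-difference Jacobian H = dh/dw (1 x n); a real implementation
    # would obtain this row by backpropagation instead.
    H = np.array([[(h(w + eps * np.eye(n)[i], x) - h(w, x)) / eps
                   for i in range(n)]])
    S = H @ P @ H.T + R                  # innovation variance (1 x 1)
    K = P @ H.T / S                      # Kalman gain (n x 1)
    w = w + (K * (y - h(w, x))).ravel()  # weight update
    P = P - K @ H @ P                    # covariance update
    return w, P

# Toy usage: a "network" that is linear in its weights, h(w, x) = w @ x.
rng = np.random.default_rng(2)
w_true = np.array([2.0, -1.0])
w, P = np.zeros(2), np.eye(2)
for _ in range(200):
    x = rng.standard_normal(2)
    y = w_true @ x + 0.1 * rng.standard_normal()
    w, P = ekf_step(w, P, x, y, lambda w, x: w @ x)
print(w)  # approaches w_true after a couple of hundred updates
```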


Geometrical Initialization, Parametrization and Control of Multilayer Perceptrons: Application to Function Approximation

This paper proposes a new method to reduce training time for neural nets used as function approximators. This method relies on a geometrical control of Multilayer Perceptrons (MLP). A geometrical initialization first gives better starting points for the learning process. A geometrical parametrization then achieves a more stable convergence. During the learning process, a dynamic geometrical c...



Journal:
  • Neural Computation

Volume 15, Issue 7

Pages: -

Publication date: 2003